Large language models (LLMs) have demonstrated impressive capabilities in natural language understanding and generation, but the quality bar for medical and clinical applications is high. Today, attempts to assess models' clinical knowledge typically rely on automated evaluations on limited benchmarks. There is no standard to evaluate model predictions and reasoning across a breadth of tasks. To address this, we present MultiMedQA, a benchmark combining six existing open question answering datasets spanning professional medical exams, research, and consumer queries, and HealthSearchQA, a new free-response dataset of medical questions searched online. We propose a framework for human evaluation of model answers along multiple axes including factuality, precision, possible harm, and bias. In addition, we evaluate PaLM (a 540-billion parameter LLM) and its instruction-tuned variant, Flan-PaLM, on MultiMedQA. Using a combination of prompting strategies, Flan-PaLM achieves state-of-the-art accuracy on every MultiMedQA multiple-choice dataset (MedQA, MedMCQA, PubMedQA, and MMLU clinical topics), including 67.6% accuracy on MedQA (US Medical Licensing Exam questions), surpassing the prior state of the art by over 17%. However, human evaluation reveals key gaps in Flan-PaLM responses. To resolve this, we introduce instruction prompt tuning, a parameter-efficient approach for aligning LLMs to new domains using a few exemplars. The resulting model, Med-PaLM, performs encouragingly, but remains inferior to clinicians. We show that comprehension, recall of knowledge, and medical reasoning improve with model scale and instruction prompt tuning, suggesting the potential utility of LLMs in medicine. Our human evaluations reveal important limitations of today's models, reinforcing the importance of both evaluation frameworks and method development in creating safe, helpful LLMs for clinical applications.
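To make the idea of instruction prompt tuning concrete, here is a minimal sketch: a small set of learnable "soft" prompt vectors is prepended to the embedded input while the language model itself stays frozen, so only a handful of parameters are trained from a few exemplars. The toy backbone, dimensions, and placeholder loss below are illustrative assumptions, not the Med-PaLM setup.

```python
# Minimal sketch of instruction prompt tuning: train only the soft prompt,
# keep the backbone LM and token embeddings frozen. All sizes are placeholders.
import torch
import torch.nn as nn

class SoftPromptWrapper(nn.Module):
    def __init__(self, frozen_lm: nn.Module, embed: nn.Embedding, n_prompt_tokens: int = 20):
        super().__init__()
        self.lm = frozen_lm      # frozen backbone (stand-in for a real LLM)
        self.embed = embed       # frozen token embeddings
        for p in list(self.lm.parameters()) + list(self.embed.parameters()):
            p.requires_grad = False
        d_model = embed.embedding_dim
        # The only trainable parameters: a handful of soft prompt vectors.
        self.soft_prompt = nn.Parameter(torch.randn(n_prompt_tokens, d_model) * 0.02)

    def forward(self, input_ids: torch.Tensor) -> torch.Tensor:
        tok = self.embed(input_ids)                                  # (B, T, D)
        prompt = self.soft_prompt.unsqueeze(0).expand(tok.size(0), -1, -1)
        return self.lm(torch.cat([prompt, tok], dim=1))              # (B, P+T, vocab)

# Toy frozen "LM": a linear stack standing in for a real decoder.
vocab, d = 1000, 64
embed = nn.Embedding(vocab, d)
frozen_lm = nn.Sequential(nn.Linear(d, d), nn.ReLU(), nn.Linear(d, vocab))
model = SoftPromptWrapper(frozen_lm, embed)

optim = torch.optim.Adam([model.soft_prompt], lr=1e-3)   # tune the prompt only
ids = torch.randint(0, vocab, (2, 16))
logits = model(ids)
loss = logits.pow(2).mean()   # placeholder loss; real training uses next-token cross-entropy
loss.backward()
optim.step()
```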
Compact and accurate representations of 3D shapes are central to many perception and robotics tasks. State-of-the-art learning-based methods can reconstruct single objects but scale poorly to large datasets. We present a novel recursive implicit representation to efficiently and accurately encode large datasets of complex 3D shapes by recursively traversing an implicit octree in latent space. Our implicit Recursive Octree Auto-Decoder (ROAD) learns a hierarchically structured latent space enabling state-of-the-art reconstruction results at a compression ratio above 99%. We also propose an efficient curriculum learning scheme that naturally exploits the coarse-to-fine properties of the underlying octree spatial representation. We explore the scaling law relating latent space dimension, dataset size, and reconstruction accuracy, showing that increasing the latent space dimension is enough to scale to large shape datasets. Finally, we show that our learned latent space encodes a coarse-to-fine hierarchical structure yielding reusable latents across different levels of detail, and we provide qualitative evidence of generalization to novel shapes outside the training set.
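A minimal sketch of recursively decoding a latent octree is shown below: a decoder maps each node latent to an occupancy estimate plus eight child latents, and coarse levels are decoded before fine ones. The network sizes, depth, and occupancy head are illustrative assumptions, not the exact ROAD architecture.

```python
# Sketch of a recursive octree decoder in latent space (illustrative only).
import torch
import torch.nn as nn

class NodeDecoder(nn.Module):
    def __init__(self, latent_dim: int = 32):
        super().__init__()
        self.body = nn.Sequential(nn.Linear(latent_dim, 128), nn.ReLU())
        self.occ_head = nn.Linear(128, 1)                  # node occupancy logit
        self.child_head = nn.Linear(128, 8 * latent_dim)   # latents for the 8 octants
        self.latent_dim = latent_dim

    def forward(self, z: torch.Tensor):
        h = self.body(z)
        occ = self.occ_head(h)
        children = self.child_head(h).view(-1, 8, self.latent_dim)
        return occ, children

def decode_octree(decoder: NodeDecoder, root_z: torch.Tensor, depth: int):
    """Decode coarse levels first, then progressively finer ones."""
    levels, frontier = [], root_z                 # frontier: (N, latent_dim)
    for _ in range(depth):
        occ, children = decoder(frontier)
        levels.append(occ)
        # A full system would expand only occupied nodes (pruning);
        # here every node is expanded for simplicity.
        frontier = children.reshape(-1, decoder.latent_dim)
    return levels

decoder = NodeDecoder()
root = torch.randn(1, 32, requires_grad=True)   # per-shape latent optimized as in an auto-decoder
occupancies = decode_octree(decoder, root, depth=3)
print([o.shape for o in occupancies])           # 1, 8, and 64 nodes at successive levels
```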
Importance: Social determinants of health (SDOH) are known to be associated with increased risk of suicidal behaviors, but few studies have utilized SDOH extracted from unstructured electronic health record (EHR) notes. Objective: To investigate associations between suicide and recent SDOH, identified using structured and unstructured data. Design: Nested case-control study. Setting: EHR data from the US Veterans Health Administration (VHA). Participants: 6,122,785 Veterans who received care in the US VHA between October 1, 2010, and September 30, 2015. Exposures: Occurrence of SDOH over a maximum span of two years compared with no occurrence of SDOH. Main Outcomes and Measures: Cases of suicide death were matched with 4 controls on birth year, cohort entry date, sex, and duration of follow-up. We developed an NLP system to extract SDOH from unstructured notes. Structured data, NLP applied to unstructured notes, and their combination yielded seven, eight, and nine SDOH, respectively. Adjusted odds ratios (aORs) and 95% confidence intervals (CIs) were estimated using conditional logistic regression. Results: In our cohort, 8,821 Veterans died by suicide during 23,725,382 person-years of follow-up (incidence rate 37.18 per 100,000 person-years). Our cohort was mostly male (92.23%) and white (76.99%). Across the six SDOH common to both data sources, NLP-extracted SDOH covered, on average, 84.38% of all SDOH occurrences. All SDOH, whether measured from structured data or extracted by NLP, were significantly associated with increased risk of suicide. The SDOH with the largest effect was legal problems (aOR=2.67, 95% CI=2.46-2.89), followed by violence (aOR=2.26, 95% CI=2.11-2.43). These associations held for both NLP-extracted and structured SDOH. Conclusions and Relevance: NLP-extracted SDOH were consistently and significantly associated with increased risk of suicide among Veterans, suggesting the potential of NLP in public health studies.
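For readers unfamiliar with the matched case-control analysis, here is a hedged sketch of estimating adjusted odds ratios with conditional logistic regression, assuming statsmodels' ConditionalLogit. The data, stratum structure, and covariate names ("legal_problems", "violence") are synthetic placeholders that only echo the abstract, not the study's actual coding.

```python
# Sketch: adjusted odds ratios in a 1:4 matched case-control design via
# conditional logistic regression (synthetic data, illustrative only).
import numpy as np
import pandas as pd
from statsmodels.discrete.conditional_models import ConditionalLogit

rng = np.random.default_rng(0)
rows = []
for stratum in range(500):                 # each stratum: 1 case + 4 matched controls
    for case in [1, 0, 0, 0, 0]:
        rows.append({
            "stratum": stratum,
            "suicide": case,
            # synthetic binary SDOH exposures
            "legal_problems": rng.binomial(1, 0.15 + 0.10 * case),
            "violence": rng.binomial(1, 0.10 + 0.08 * case),
        })
df = pd.DataFrame(rows)

exog = df[["legal_problems", "violence"]]
model = ConditionalLogit(df["suicide"], exog, groups=df["stratum"])
res = model.fit()

print("aOR:", np.exp(res.params))          # adjusted odds ratios
print("95% CI:", np.exp(res.conf_int()))   # confidence intervals on the OR scale
```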
Autonomous navigation in crowded spaces poses a challenge for mobile robots due to the highly dynamic, partially observable environment. Occlusions are highly prevalent in such settings due to a limited sensor field of view and obstructing human agents. Previous work has shown that observed interactive behaviors of human agents can be used to estimate potential obstacles despite occlusions. We propose integrating such social inference techniques into the planning pipeline. We use a variational autoencoder with a specially designed loss function to learn representations that are meaningful for occlusion inference. This work adopts a deep reinforcement learning approach to incorporate the learned representation for occlusion-aware planning. In simulation, our occlusion-aware policy achieves comparable collision avoidance performance to fully observable navigation by estimating agents in occluded spaces. We demonstrate successful policy transfer from simulation to the real-world Turtlebot 2i. To the best of our knowledge, this work is the first to use social occlusion inference for crowd navigation.
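As a rough illustration of the representation-learning step, the sketch below shows a small variational autoencoder whose decoder predicts an occupancy estimate covering both visible and occluded cells. The feature sizes and loss weighting are assumptions; the paper's specially designed loss may differ in detail.

```python
# Sketch of a VAE for occlusion inference (illustrative sizes and loss).
import torch
import torch.nn as nn
import torch.nn.functional as F

class OcclusionVAE(nn.Module):
    def __init__(self, obs_dim=40, grid_cells=64, z_dim=16):
        super().__init__()
        self.enc = nn.Sequential(nn.Linear(obs_dim, 128), nn.ReLU())
        self.mu, self.logvar = nn.Linear(128, z_dim), nn.Linear(128, z_dim)
        self.dec = nn.Sequential(nn.Linear(z_dim, 128), nn.ReLU(),
                                 nn.Linear(128, grid_cells))      # occupancy logits

    def forward(self, obs):
        h = self.enc(obs)
        mu, logvar = self.mu(h), self.logvar(h)
        z = mu + torch.randn_like(mu) * torch.exp(0.5 * logvar)   # reparameterization
        return self.dec(z), mu, logvar

def loss_fn(logits, target_occupancy, mu, logvar, beta=0.1):
    # Reconstruction over the full grid (visible + occluded cells) plus a KL
    # term that keeps the latent space smooth for downstream RL planning.
    recon = F.binary_cross_entropy_with_logits(logits, target_occupancy)
    kl = -0.5 * torch.mean(1 + logvar - mu.pow(2) - logvar.exp())
    return recon + beta * kl

model = OcclusionVAE()
obs = torch.randn(8, 40)                        # observed neighbor states (placeholder)
target = torch.randint(0, 2, (8, 64)).float()   # ground-truth occupancy available in simulation
logits, mu, logvar = model(obs)
loss = loss_fn(logits, target, mu, logvar)
loss.backward()
```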
We study the problem of safe and intention-aware robot navigation in dense and interactive crowds. Most previous reinforcement learning (RL) methods fail to account for the different types of interactions among all agents or ignore people's intentions, which leads to degraded performance. In this paper, we propose a novel recurrent graph neural network with attention mechanisms that captures heterogeneous interactions among agents across space and time. To encourage longsighted robot behavior, we infer the intentions of dynamic agents by predicting their future trajectories over several time steps. The predictions are incorporated into a model-free RL framework to prevent the robot from intruding into the intended paths of other agents. We demonstrate that our method enables the robot to achieve good navigation performance and non-intrusiveness in challenging crowd navigation scenarios. We successfully transfer the policy learned in simulation to a Turtlebot 2i in the real world.
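A minimal sketch of the attention step over the robot-human interaction graph follows, assuming the robot state queries human-agent embeddings and a GRU cell carries the representation through time. The feature dimensions and single attention layer are illustrative; the paper's recurrent graph network is more elaborate.

```python
# Sketch: robot-centric attention over human agents with temporal recurrence.
import torch
import torch.nn as nn

class InteractionAttention(nn.Module):
    def __init__(self, d=32, n_heads=4):
        super().__init__()
        self.embed_robot = nn.Linear(7, d)     # e.g. pose, velocity, goal (placeholder features)
        self.embed_human = nn.Linear(5, d)     # e.g. position, velocity, radius
        self.attn = nn.MultiheadAttention(d, n_heads, batch_first=True)
        self.rnn = nn.GRUCell(d, d)            # recurrence across time steps

    def forward(self, robot, humans, hidden):
        q = self.embed_robot(robot).unsqueeze(1)    # (B, 1, d)
        kv = self.embed_human(humans)               # (B, N, d)
        ctx, weights = self.attn(q, kv, kv)         # robot attends to nearby humans
        hidden = self.rnn(ctx.squeeze(1), hidden)   # carry interaction state forward
        return hidden, weights

model = InteractionAttention()
B, N = 4, 6
hidden = torch.zeros(B, 32)
robot, humans = torch.randn(B, 7), torch.randn(B, N, 5)
hidden, attn_weights = model(robot, humans, hidden)  # hidden would feed the RL policy head
```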
Applying reinforcement learning (RL) methods to robots typically involves training a policy in simulation and deploying it on a robot in the real world. Because of the model mismatch between the real world and the simulator, RL agents deployed this way tend to perform poorly. To address this problem, researchers have developed robust policy-learning algorithms that rely on synthetic noise disturbances, but these methods do not guarantee performance in the target environment. We propose a convex risk minimization algorithm that estimates the model mismatch between the simulator and the target domain using trajectory data from both environments. We show that this estimator can be used together with the simulator to evaluate the performance of an RL agent in the target domain, effectively bridging the gap between the two environments. We also show that our estimator converges at a rate of $n^{-1/4}$, where $n$ is the number of training samples. In simulation, we demonstrate how our method effectively approximates and evaluates performance on Gridworld, Cartpole, and Reacher environments. We also show that our method can estimate the performance of a 7-DOF robotic arm using the simulator and remotely collected data from the real-world robot.
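To give a flavor of how trajectory data from both environments can be used, here is one standard density-ratio construction: fit a convex (logistic-loss) classifier separating simulator from real data and use its odds to reweight simulator returns. This is a common illustration of the idea, not necessarily the paper's exact estimator, and all data below are placeholders.

```python
# Sketch: convex classifier -> density ratio -> reweighted performance estimate.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
# Placeholder per-trajectory feature summaries from each environment.
sim_trajs = rng.normal(0.0, 1.0, size=(2000, 6))
real_trajs = rng.normal(0.2, 1.1, size=(500, 6))

X = np.vstack([sim_trajs, real_trajs])
y = np.concatenate([np.zeros(len(sim_trajs)), np.ones(len(real_trajs))])
clf = LogisticRegression(max_iter=1000).fit(X, y)        # convex risk minimization

# Density ratio p_real / p_sim for simulator trajectories, from classifier odds.
p = clf.predict_proba(sim_trajs)[:, 1]
prior_correction = len(sim_trajs) / len(real_trajs)
ratio = (p / (1.0 - p)) * prior_correction

sim_returns = rng.normal(1.0, 0.3, size=len(sim_trajs))  # placeholder per-trajectory returns
target_estimate = np.average(sim_returns, weights=ratio)
print("reweighted performance estimate:", target_estimate)
```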
Prostate cancer is the second deadliest cancer for men in the United States. While magnetic resonance imaging (MRI) is increasingly used to guide targeted biopsies for prostate cancer diagnosis, its utility remains limited by high rates of false positives and false negatives as well as low inter-reader agreement. Machine learning methods to detect and localize cancer on prostate MRI can help standardize radiologist interpretation. However, existing machine learning approaches vary not only in model architecture but also in the ground-truth labeling strategies used for model training. In this study, we compare different labeling strategies: pathology-confirmed radiologist labels, pathologist labels on whole-mount histopathology images, and lesion-level and pixel-level digital pathologist labels (from a previously validated deep learning algorithm that predicts pixel-level Gleason patterns on whole-mount histopathology images). We analyze the effects of these labels on the performance of the trained machine learning models. Our experiments show that (1) radiologist labels, and models trained with them, can miss cancers or underestimate cancer extent; (2) digital pathologist labels, and models trained with them, have high concordance with pathologist labels; and (3) models trained with digital pathologist labels achieve the best performance in two different cohorts with different disease distributions, regardless of the model architecture used. Digital pathologist labels can reduce challenges associated with human annotation, including labor, time, and inter-reader variability, and can help bridge the gap between prostate radiology and pathology by enabling the training of reliable machine learning models to detect and localize prostate cancer on MRI.
Transfer learning, where a model is first pre-trained on a data-rich task before being finetuned on a downstream task, has emerged as a powerful technique in natural language processing (NLP). The effectiveness of transfer learning has given rise to a diversity of approaches, methodology, and practice. In this paper, we explore the landscape of transfer learning techniques for NLP by introducing a unified framework that converts all text-based language problems into a text-to-text format. Our systematic study compares pre-training objectives, architectures, unlabeled data sets, transfer approaches, and other factors on dozens of language understanding tasks. By combining the insights from our exploration with scale and our new "Colossal Clean Crawled Corpus", we achieve state-of-the-art results on many benchmarks covering summarization, question answering, text classification, and more. To facilitate future work on transfer learning for NLP, we release our data set, pre-trained models, and code.
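The text-to-text framing is easy to try directly; the short example below uses the Hugging Face port of T5 (not the paper's original codebase) and casts different tasks as plain text with task prefixes. Running it downloads the t5-small checkpoint and requires the transformers and sentencepiece packages.

```python
# Text-to-text in practice: every task becomes "prefix + input text" -> output text.
from transformers import T5Tokenizer, T5ForConditionalGeneration

tokenizer = T5Tokenizer.from_pretrained("t5-small")
model = T5ForConditionalGeneration.from_pretrained("t5-small")

examples = [
    "translate English to German: The house is wonderful.",
    "summarize: Transfer learning first pre-trains a model on a data-rich "
    "task and then fine-tunes it on a downstream task of interest.",
]
for text in examples:
    input_ids = tokenizer(text, return_tensors="pt").input_ids
    output_ids = model.generate(input_ids, max_new_tokens=40)
    print(tokenizer.decode(output_ids[0], skip_special_tokens=True))
```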
We address the problem of unsupervised domain adaptation when the source domain differs from the target domain because of a shift in the distribution of a latent subgroup. When this subgroup confounds all observed data, neither covariate shift nor label shift assumptions apply. We show that the optimal target predictor can be non-parametrically identified with the help of concept and proxy variables available only in the source domain, together with unlabeled data from the target. The identification results are constructive, immediately suggesting an algorithm for estimating the optimal predictor in the target domain. For continuous observations, where this algorithm becomes impractical, we propose a latent variable model tailored to the data-generation process at hand. We show how the approach degrades as the size of the shift changes, and verify that it outperforms both covariate and label shift adjustment.
Scholarly text is often laden with jargon, or specialized language that divides disciplines. We extend past work that characterizes science at the level of word types, by using BERT-based word sense induction to find additional words that are widespread but overloaded with different uses across fields. We define scholarly jargon as discipline-specific word types and senses, and estimate its prevalence across hundreds of fields using interpretable, information-theoretic metrics. We demonstrate the utility of our approach for science of science and computational sociolinguistics by highlighting two key social implications. First, we measure audience design, and find that most fields reduce jargon when publishing in general-purpose journals, but some do so more than others. Second, though jargon has varying correlation with articles' citation rates within fields, it nearly always impedes interdisciplinary impact. Broadly, our measurements can inform ways in which language could be revised to serve as a bridge rather than a barrier in science.
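A simplified sketch of the underlying recipe appears below: contextual embeddings of a target word are collected across sentences and clustered into induced senses. This mirrors the general BERT-plus-clustering approach rather than the paper's exact configuration; the word "cell", the example sentences, and the two-cluster choice are illustrative assumptions.

```python
# Sketch: BERT-based word sense induction via clustering contextual embeddings.
# Assumes the target word maps to a single subword token in the vocabulary.
import torch
from transformers import AutoTokenizer, AutoModel
from sklearn.cluster import KMeans

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModel.from_pretrained("bert-base-uncased")

target = "cell"
sentences = [
    "The cell divides during mitosis.",
    "Each battery cell stores a fixed amount of charge.",
    "The prisoner returned to his cell.",
    "Stem cell therapies are under active study.",
]

target_id = tokenizer.convert_tokens_to_ids(target)
vectors = []
with torch.no_grad():
    for s in sentences:
        enc = tokenizer(s, return_tensors="pt")
        hidden = model(**enc).last_hidden_state[0]            # (tokens, 768)
        idx = (enc["input_ids"][0] == target_id).nonzero()[0, 0]
        vectors.append(hidden[idx].numpy())                   # embedding of the target occurrence

labels = KMeans(n_clusters=2, n_init=10).fit_predict(vectors)
print(dict(zip(sentences, labels)))   # induced sense cluster per occurrence
```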